Building Machine Learning Systems with Python

Master the art of machine learning with Python and build effective machine learning systems with this intensive hands-on guide

Willi Richert
Luis Pedro Coelho

BIRMINGHAM - MUMBAI

Building Machine Learning Systems with Python

Copyright © 2013 Packt Publishing

All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.

Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the authors, nor Packt Publishing, nor its dealers and distributors will be held liable for any damages caused or alleged to be caused directly or indirectly by this book.

Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.

First published: July 2013
Production Reference: 1200713

Published by Packt Publishing Ltd.
Livery Place
35 Livery Street
Birmingham B3 2PB, UK.

ISBN 978-1-78216-140-0

www.packtpub.com

Cover Image by Asher Wishkerman (a.wishkerman@mpic.de)

Credits

Authors: Willi Richert, Luis Pedro Coelho
Reviewers: Matthieu Brucher, Mike Driscoll, Maurice HT Ling
Acquisition Editor: Kartikey Pandey
Lead Technical Editor: Mayur Hule
Technical Editors: Sharvari H. Baet, Ruchita Bhansali, Athira Laji, Zafeer Rais
Copy Editors: Insiya Morbiwala, Aditya Nair, Alfida Paiva, Laxmi Subramanian
Project Coordinator: Anurag Banerjee
Proofreader: Paul Hindle
Indexer: Tejal R. Soni
Graphics: Abhinash Sahu
Production Coordinator: Aditi Gajjar
Cover Work: Aditi Gajjar

About the Authors

Willi Richert has a PhD in Machine Learning and Robotics, and he currently works for Microsoft in the Core Relevance Team of Bing, where he is involved in a variety of machine learning areas such as active learning and statistical machine translation.

This book would not have been possible without the support of my wife Natalie and my sons Linus and Moritz. I am also especially grateful for the many fruitful discussions with my current and previous managers, Andreas Bode, Clemens Marschner, Hongyan Zhou, and Eric Crestan, as well as my colleagues and friends, Tomasz Marciniak, Cristian Eigel, Oliver Niehoerster, and Philipp Adelt. The interesting ideas are most likely from them; the bugs belong to me.

Luis Pedro Coelho is a computational biologist: someone who uses computers as a tool to understand biological systems. Within this large field, Luis works in bioimage informatics, the application of machine learning techniques to the analysis of images of biological specimens. His main focus is on the processing of large-scale image data. With robotic microscopes, it is possible to acquire hundreds of thousands of images in a day, and visual inspection of all the images becomes impossible.

Luis has a PhD from Carnegie Mellon University, one of the leading universities in the world in the area of machine learning, and is the author of several scientific publications.

Luis started developing open source software in 1998 as a way to apply to real code what he was learning in his computer science courses at the Technical University of Lisbon. In 2004, he started developing in Python and has contributed to several open source libraries in this language. He is the lead developer of mahotas, the popular computer vision package for Python, and has contributed to several machine learning libraries.
I thank my wife Rita for all her love and support, and I thank my daughter Anna for being the best thing ever.

About the Reviewers

Matthieu Brucher holds an Engineering degree from the Ecole Superieure d'Electricite (Information, Signals, Measures), France, and has a PhD in Unsupervised Manifold Learning from the Universite de Strasbourg, France. He currently holds an HPC Software Developer position in an oil company and works on next-generation reservoir simulation.

Mike Driscoll has been programming in Python since Spring 2006. He enjoys writing about Python on his blog at http://www.blog.pythonlibrary.org/. Mike also occasionally writes for the Python Software Foundation, i-Programmer, and Developer Zone. He enjoys photography and reading a good book. Mike has also been a technical reviewer for the following Packt Publishing books: Python 3 Object Oriented Programming, Python 2.6 Graphics Cookbook, and Python Web Development Beginner's Guide.

I would like to thank my wife, Evangeline, for always supporting me. I would also like to thank my friends and family for all that they do to help me. And I would like to thank Jesus Christ for saving me.

Maurice HT Ling completed his PhD in Bioinformatics and BSc (Hons) in Molecular and Cell Biology at the University of Melbourne. He is currently a research fellow at Nanyang Technological University, Singapore, and an honorary fellow at the University of Melbourne, Australia. He co-edits The Python Papers and co-founded the Python User Group (Singapore), where he has served as vice president since 2010. His research interests lie in life (biological life, artificial life, and artificial intelligence), using computer science and statistics as tools to understand life and its numerous aspects. You can find his website at http://maurice.vodien.com.

www.PacktPub.com

Support files, eBooks, discount offers, and more

You might want to visit www.PacktPub.com for support files and downloads related to your book.
Did you know that Packt offers eBook versions of every book published, with PDF and ePub files available? You can upgrade to the eBook version at www.PacktPub.com and, as a print book customer, you are entitled to a discount on the eBook copy. Get in touch with us at service@packtpub.com for more details.

At www.PacktPub.com, you can also read a collection of free technical articles, sign up for a range of free newsletters, and receive exclusive discounts and offers on Packt books and eBooks.

http://PacktLib.PacktPub.com

Do you need instant solutions to your IT questions? PacktLib is Packt's online digital book library. Here, you can access, read, and search across Packt's entire library of books.

Why Subscribe?
• Fully searchable across every book published by Packt
• Copy and paste, print, and bookmark content
• On demand and accessible via web browser

Free Access for Packt account holders

If you have an account with Packt at www.PacktPub.com, you can use this to access PacktLib today and view nine entirely free books. Simply use your login credentials for immediate access.
Table of Contents

Preface

Chapter 1: Getting Started with Python Machine Learning
    Machine learning and Python – the dream team
    What the book will teach you (and what it will not)
    What to do when you are stuck
    Getting started
    Introduction to NumPy, SciPy, and Matplotlib
    Installing Python
    Chewing data efficiently with NumPy and intelligently with SciPy
    Learning NumPy
    Indexing
    Handling non-existing values
    Comparing runtime behaviors
    Learning SciPy
    Our first (tiny) machine learning application
    Reading in the data
    Preprocessing and cleaning the data
    Choosing the right model and learning algorithm
    Before building our first model
    Starting with a simple straight line
    Towards some advanced stuff
    Stepping back to go forward – another look at our data
    Training and testing
    Answering our initial question
    Summary

Chapter 2: Learning How to Classify with Real-world Examples
    The Iris dataset
    The first step is visualization
    Building our first classification model
    Evaluation – holding out data and cross-validation
    Building more complex classifiers
    A more complex dataset and a more complex classifier
    Learning about the Seeds dataset
    Features and feature engineering
    Nearest neighbor classification
    Binary and multiclass classification
    Summary

Chapter 3: Clustering – Finding Related Posts
    Measuring the relatedness of posts
    How not to do it
    How to do it
    Preprocessing – similarity measured as similar number of common words
    Converting raw text into a bag-of-words
    Counting words
    Normalizing the word count vectors
    Removing less important words
    Stemming
    Installing and using NLTK
    Extending the vectorizer with NLTK's stemmer
    Stop words on steroids
    Our achievements and goals
    Clustering
    KMeans
    Getting test data to evaluate our ideas on
    Clustering posts
    Solving our initial challenge
    Another look at noise
    Tweaking the parameters
    Summary

Chapter 4: Topic Modeling
    Latent Dirichlet allocation (LDA)
    Building a topic model
    Comparing similarity in topic space
    Modeling the whole of Wikipedia
    Choosing the number of topics
    Summary

Chapter 5: Classification – Detecting Poor Answers
    Sketching our roadmap
    Learning to classify classy answers
    Tuning the instance
    Tuning the classifier
    Fetching the data
    Slimming the data down to chewable chunks
    Preselection and processing of attributes
    Defining what is a good answer
    Creating our first classifier
    Starting with the k-nearest neighbor (kNN) algorithm
    Engineering the features
    Training the classifier
    Measuring the classifier's performance
    Designing more features
    Deciding how to improve
    Bias-variance and its trade-off
    Fixing high bias
    Fixing high variance
    High bias or low bias
    Using logistic regression
    A bit of math with a small example
    Applying logistic regression to our post-classification problem
    Looking behind accuracy – precision and recall
    Slimming the classifier
    Ship it!
    Summary

Chapter 6: Classification II – Sentiment Analysis
    Sketching our roadmap
    Fetching the Twitter data
    Introducing the Naive Bayes classifier
    Getting to know the Bayes theorem
    Being naive
    Using Naive Bayes to classify
    Accounting for unseen words and other oddities
    Accounting for arithmetic underflows
    Creating our first classifier and tuning it
    Solving an easy problem first
    Using all the classes
    Tuning the classifier's parameters
    Cleaning tweets
    Taking the word types into account
    Determining the word types
    Successfully cheating using SentiWordNet
    Our first estimator
    Putting everything together
    Summary

Chapter 7: Regression – Recommendations
    Predicting house prices with regression
    Multidimensional regression
    Cross-validation for regression
    Penalized regression
    L1 and L2 penalties
    Using Lasso or Elastic nets in scikit-learn
    P greater than N scenarios
    An example based on text
    Setting hyperparameters in a smart way
    Rating prediction and recommendations
    Summary

Chapter 8: Regression – Recommendations Improved
    Improved recommendations
    Using the binary matrix of recommendations
    Looking at the movie neighbors
    Combining multiple methods
    Basket analysis
    Obtaining useful predictions
    Analyzing supermarket shopping baskets
    Association rule mining
    More advanced basket analysis
    Summary

Chapter 9: Classification III – Music Genre Classification
    Sketching our roadmap
    Fetching the music data
    Converting into a wave format
    Looking at music
    Decomposing music into sine wave components
    Using FFT to build our first classifier
    Increasing experimentation agility
    Training the classifier
    Using the confusion matrix to measure accuracy in multiclass problems
    An alternate way to measure classifier performance using receiver operating characteristic (ROC)
    Improving classification performance with Mel Frequency Cepstral Coefficients
    Summary

Chapter 10: Computer Vision – Pattern Recognition
    Introducing image processing
    Loading and displaying images
    Basic image processing
    Thresholding
    Gaussian blurring
    Filtering for different effects
    Adding salt and pepper noise
    Putting the center in focus
    Pattern recognition
    Computing features from images
    Writing your own features
    Classifying a harder dataset
    Local feature representations
    Summary

Chapter 11: Dimensionality Reduction
    Sketching our roadmap
    Selecting features
    Detecting redundant features using filters
    Correlation
    Mutual information
    Asking the model about the features using wrappers
    Other feature selection methods
    Feature extraction
    About principal component analysis (PCA)
    Sketching PCA
    Applying PCA
    Limitations of PCA and how LDA can help
    Multidimensional scaling (MDS)
    Summary

Chapter 12: Big(ger) Data
    Learning about big data
    Using jug to break up your pipeline into tasks
    About tasks
    Reusing partial results
    Looking under the hood
    Using jug for data analysis
    Using Amazon Web Services (AWS)
    Creating your first machines
    Installing Python packages on Amazon Linux
    Running jug on our cloud machine
    Automating the generation of clusters with starcluster
    Summary

Appendix: Where to Learn More about Machine Learning
    Online courses
    Books
    Q&A sites
    Blogs
    Data sources
    Getting competitive
    What was left out
    Summary

Index

Preface

You could argue that it is a fortunate coincidence that you are holding this book in your hands (or your e-book reader). After all, there are millions of books printed every year, which are read by millions of readers; and then there is this book read by you.
You could also argue that a couple of machine learning algorithms played their role in leading you to this book (or this book to you). And we, the authors, are happy that you want to understand more about the how and why.

Most of this book will cover the how. How should the data be processed so that machine learning algorithms can make the most out of it? How should you choose the right algorithm for the problem at hand? Occasionally, we will also cover the why. Why is it important to measure correctly? Why does one algorithm outperform another in a given scenario?

We know that there is much more to learn to be an expert in the field. After all, we only cover some of the "hows" and just a tiny fraction of the "whys". But in the end, we hope that this mixture will help you get up and running as quickly as possible.

What this book covers

Chapter 1, Getting Started with Python Machine Learning, introduces the basic idea of machine learning with a very simple example. Despite its simplicity, it will challenge us with the risk of overfitting.

Chapter 2, Learning How to Classify with Real-world Examples, explains the use of real data to learn about classification, whereby we train a computer to distinguish between different classes of flowers.

Chapter 3, Clustering – Finding Related Posts, shows how powerful the bag-of-words approach is when we apply it to finding similar posts without really understanding them.

Chapter 4, Topic Modeling, takes us beyond assigning each post to a single cluster, and shows how assigning posts to several topics at once lets us deal with real text, which often covers multiple subjects.

Chapter 5, Classification – Detecting Poor Answers, explains how to use logistic regression to determine whether a user's answer to a question is good or bad. Behind the scenes, we will learn how to use the bias-variance trade-off to debug machine learning models.
Chapter 6, Classification II – Sentiment Analysis, introduces how Naive Bayes works, and how to use it to classify tweets as positive or negative.

Chapter 7, Regression – Recommendations, discusses a classical topic in handling data that is still relevant today. We will use it to build recommendation systems: systems that take a user's likes and dislikes as input and recommend new products.

Chapter 8, Regression – Recommendations Improved, improves our recommendations by using multiple methods at once. We will also see how to build recommendations from shopping data alone, without the need for rating data (which users do not always provide).

Chapter 9, Classification III – Music Genre Classification, illustrates that if someone has scrambled our huge music collection, our only hope of restoring order is to let a machine learner classify the songs. It will turn out that it is sometimes better to trust someone else's expertise than to create the features ourselves.

Chapter 10, Computer Vision – Pattern Recognition, explains how to apply classification in the specific context of handling images, a field known as pattern recognition.

Chapter 11, Dimensionality Reduction, teaches us what other methods exist to downsize data so that it becomes chewable by our machine learning algorithms.

Chapter 12, Big(ger) Data, explains how data sizes keep getting bigger, and how this often becomes a problem for analysis. In this chapter, we explore some approaches for dealing with larger data by taking advantage of multiple cores or computing clusters. We also introduce cloud computing, using Amazon Web Services as our cloud provider.

Appendix, Where to Learn More about Machine Learning, covers a list of wonderful resources available for machine learning.

What you need for this book

This book assumes you know Python and how to install a library using easy_install or pip.
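As a quick sanity check of your environment, the following sketch (our illustration, not from the book) imports the main libraries and prints their installed versions. Note that scikit-learn installs under the import name sklearn:

```python
# Verify that the scientific Python stack used in this book is importable,
# and report which versions are installed (any reasonably recent release
# should work).
import numpy
import scipy
import sklearn  # the scikit-learn package imports as "sklearn"

for name, module in [("NumPy", numpy),
                     ("SciPy", scipy),
                     ("scikit-learn", sklearn)]:
    print("%s: %s" % (name, module.__version__))
```

If any import fails, `pip install numpy scipy scikit-learn` will fetch the missing package.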
We do not rely on any advanced mathematics such as calculus or matrix algebra.

To summarize, we use the following versions throughout this book, but you should be fine with any more recent one:

• Python: 2.7
• NumPy: 1.6.2
• SciPy: 0.11
• Scikit-learn: 0.13

Who this book is for

This book is for Python programmers who want to learn how to perform machine learning using open source libraries. We will walk through the basic modes of machine learning based on realistic examples.

This book is also for machine learners who want to start using Python to build their systems. Python is a flexible language for rapid prototyping, while the underlying algorithms are all written in optimized C or C++. Therefore, the resulting code is fast and robust enough to be usable in production as well.

Conventions

In this book, you will find a number of styles of text that distinguish between different kinds of information. Here are some examples of these styles, and an explanation of their meaning.

Code words in text are shown as follows: "We can include other contexts through the use of the include directive".

A block of code is set as follows:

def nn_movie(movie_likeness, reviews, uid, mid):
    likes = movie_likeness[mid].argsort()
    # reverse the sorting so that the most alike are
    # at the beginning
    likes = likes[::-1]
    # return the rating for the most similar movie available
    for ell in likes:
        if reviews[uid, ell] > 0:
            return reviews[uid, ell]

When we wish to draw your attention to a particular part of a code block, the relevant lines or items are set in bold:

def nn_movie(movie_likeness, reviews, uid, mid):
    likes = movie_likeness[mid].argsort()
    # reverse the sorting so that the most alike are
    # at the beginning
    likes = likes[::-1]
    # return the rating for the most similar movie available
    for ell in likes:
        if reviews[uid, ell] > 0:
            return reviews[uid, ell]

New terms and important words are shown in bold.
Words that you see on the screen, in menus or dialog boxes for example, appear in the text like this: "clicking on the Next button moves you to the next screen".

Warnings or important notes appear in a box like this.

Tips and tricks appear like this.

Reader feedback

Feedback from our readers is always welcome. Let us know what you think about this book: what you liked or may have disliked. Reader feedback is important for us to develop titles that you really get the most out of.

To send us general feedback, simply send an e-mail to feedback@packtpub.com, and mention the book title in the subject of your message.

If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, see our author guide at www.packtpub.com/authors.

Customer support

Now that you are the proud owner of a Packt book, we have a number of things to help you get the most from your purchase.

Downloading the example code

You can download the example code files for all Packt books you have purchased from your account at http://www.packtpub.com. If you purchased this book elsewhere, you can visit http://www.packtpub.com/support and register to have the files e-mailed directly to you.

Errata

Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you find a mistake in one of our books, maybe a mistake in the text or the code, we would be grateful if you would report this to us. By doing so, you can save other readers from frustration and help us improve subsequent versions of this book. If you find any errata, please report them by visiting http://www.packtpub.com/submit-errata, selecting your book, clicking on the errata submission form link, and entering the details of your errata. Once your errata are verified, your submission will be accepted and the errata will be uploaded to our website, or added to any list of existing errata, under the Errata section of that title.
Any existing errata can be viewed by selecting your title from http://www.packtpub.com/support.

Piracy

Piracy of copyrighted material on the Internet is an ongoing problem across all media. At Packt, we take the protection of our copyright and licenses very seriously. If you come across any illegal copies of our works, in any form, on the Internet, please provide us with the location address or website name immediately so that we can pursue a remedy. Please contact us at copyright@packtpub.com with a link to the suspected pirated material. We appreciate your help in protecting our authors, and our ability to bring you valuable content.

Questions

You can contact us at questions@packtpub.com if you are having a problem with any aspect of the book, and we will do our best to address it.